
[AArch64] Check for negative numbers when adjusting icmps #140999


Open · wants to merge 2 commits into main

Conversation

AZero13
Contributor

@AZero13 AZero13 commented May 22, 2025

We can catch negatives that can be encoded in cmn this way!

@llvmbot
Member

llvmbot commented May 22, 2025

@llvm/pr-subscribers-backend-aarch64

Author: AZero13 (AZero13)

Changes

We can catch negatives that can be encoded in cmn this way!
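For example (this mirrors the new almost_immediate_neg_sge test in the diff below), an sge compare against -16707583 is not a legal cmp immediate, but after adjusting the predicate to sgt and the constant to -16707584, the negated constant fits the shifted 12-bit add/sub encoding, so the compare can be emitted as cmn:

    define i1 @almost_immediate_neg_sge(i32 %x) {
      %cmp = icmp sge i32 %x, -16707583   ; constant does not fit cmp's immediate field
      ret i1 %cmp
    }
    ; expected codegen (see the test below):
    ;   cmn w0, #4079, lsl #12    // compares w0 against -16707584
    ;   cset w0, gt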


Full diff: https://github.com/llvm/llvm-project/pull/140999.diff

2 Files Affected:

  • (modified) llvm/lib/Target/AArch64/AArch64ISelLowering.cpp (+34-28)
  • (modified) llvm/test/CodeGen/AArch64/cmp-to-cmn.ll (+184)
diff --git a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
index b7f0bcfd015bc..e896717d4a06d 100644
--- a/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
+++ b/llvm/lib/Target/AArch64/AArch64ISelLowering.cpp
@@ -3647,6 +3647,16 @@ static bool isLegalArithImmed(uint64_t C) {
   return IsLegal;
 }
 
+bool isLegalCmpImmed(int64_t Immed) {
+  if (Immed == std::numeric_limits<int64_t>::min()) {
+    LLVM_DEBUG(dbgs() << "Illegal add imm " << Immed
+                      << ": avoid UB for INT64_MIN\n");
+    return false;
+  }
+  // Same encoding for add/sub, just flip the sign.
+  return isLegalArithImmed((uint64_t)std::abs(Immed));
+}
+
 static bool cannotBeIntMin(SDValue CheckedVal, SelectionDAG &DAG) {
   KnownBits KnownSrc = DAG.computeKnownBits(CheckedVal);
   return !KnownSrc.getSignedMinValue().isMinSignedValue();
@@ -4077,52 +4087,53 @@ static SDValue getAArch64Cmp(SDValue LHS, SDValue RHS, ISD::CondCode CC,
                              const SDLoc &dl) {
   if (ConstantSDNode *RHSC = dyn_cast<ConstantSDNode>(RHS.getNode())) {
     EVT VT = RHS.getValueType();
-    uint64_t C = RHSC->getZExtValue();
-    if (!isLegalArithImmed(C)) {
+    int64_t C = RHSC->getSExtValue();
+    if (!isLegalCmpImmed(C)) {
       // Constant does not fit, try adjusting it by one?
       switch (CC) {
       default:
         break;
       case ISD::SETLT:
       case ISD::SETGE:
-        if ((VT == MVT::i32 && C != 0x80000000 &&
-             isLegalArithImmed((uint32_t)(C - 1))) ||
-            (VT == MVT::i64 && C != 0x80000000ULL &&
-             isLegalArithImmed(C - 1ULL))) {
+        if ((VT == MVT::i32 && C != INT32_MIN && isLegalCmpImmed(C - 1)) ||
+            (VT == MVT::i64 && C != INT64_MIN && isLegalCmpImmed(C - 1))) {
           CC = (CC == ISD::SETLT) ? ISD::SETLE : ISD::SETGT;
-          C = (VT == MVT::i32) ? (uint32_t)(C - 1) : C - 1;
+          C = C - 1;
+          if (VT == MVT::i32)
+            C &= 0xFFFFFFFF;
           RHS = DAG.getConstant(C, dl, VT);
         }
         break;
       case ISD::SETULT:
       case ISD::SETUGE:
-        if ((VT == MVT::i32 && C != 0 &&
-             isLegalArithImmed((uint32_t)(C - 1))) ||
-            (VT == MVT::i64 && C != 0ULL && isLegalArithImmed(C - 1ULL))) {
+        if ((VT == MVT::i32 && C != 0 && isLegalCmpImmed(C - 1)) ||
+            (VT == MVT::i64 && C != 0 && isLegalCmpImmed(C - 1))) {
           CC = (CC == ISD::SETULT) ? ISD::SETULE : ISD::SETUGT;
-          C = (VT == MVT::i32) ? (uint32_t)(C - 1) : C - 1;
+          C = C - 1;
+          if (VT == MVT::i32)
+            C &= 0xFFFFFFFF;
           RHS = DAG.getConstant(C, dl, VT);
         }
         break;
       case ISD::SETLE:
       case ISD::SETGT:
-        if ((VT == MVT::i32 && C != INT32_MAX &&
-             isLegalArithImmed((uint32_t)(C + 1))) ||
-            (VT == MVT::i64 && C != INT64_MAX &&
-             isLegalArithImmed(C + 1ULL))) {
+        if ((VT == MVT::i32 && C != INT32_MAX && isLegalCmpImmed(C + 1)) ||
+            (VT == MVT::i64 && C != INT64_MAX && isLegalCmpImmed(C + 1))) {
           CC = (CC == ISD::SETLE) ? ISD::SETLT : ISD::SETGE;
-          C = (VT == MVT::i32) ? (uint32_t)(C + 1) : C + 1;
+          C = C + 1;
+          if (VT == MVT::i32)
+            C &= 0xFFFFFFFF;
           RHS = DAG.getConstant(C, dl, VT);
         }
         break;
       case ISD::SETULE:
       case ISD::SETUGT:
-        if ((VT == MVT::i32 && C != UINT32_MAX &&
-             isLegalArithImmed((uint32_t)(C + 1))) ||
-            (VT == MVT::i64 && C != UINT64_MAX &&
-             isLegalArithImmed(C + 1ULL))) {
+        if ((VT == MVT::i32 && C != -1 && isLegalCmpImmed(C + 1)) ||
+            (VT == MVT::i64 && C != -1 && isLegalCmpImmed(C + 1))) {
           CC = (CC == ISD::SETULE) ? ISD::SETULT : ISD::SETUGE;
-          C = (VT == MVT::i32) ? (uint32_t)(C + 1) : C + 1;
+          C = C + 1;
+          if (VT == MVT::i32)
+            C &= 0xFFFFFFFF;
           RHS = DAG.getConstant(C, dl, VT);
         }
         break;
@@ -4141,7 +4152,7 @@ static SDValue getAArch64Cmp(SDValue LHS, SDValue RHS, ISD::CondCode CC,
   // can be turned into:
   //    cmp     w12, w11, lsl #1
   if (!isa<ConstantSDNode>(RHS) ||
-      !isLegalArithImmed(RHS->getAsAPIntVal().abs().getZExtValue())) {
+      !isLegalCmpImmed(RHS->getAsAPIntVal().getSExtValue())) {
     bool LHSIsCMN = isCMN(LHS, CC, DAG);
     bool RHSIsCMN = isCMN(RHS, CC, DAG);
     SDValue TheLHS = LHSIsCMN ? LHS.getOperand(1) : LHS;
@@ -17673,12 +17684,7 @@ bool AArch64TargetLowering::isLegalAddImmediate(int64_t Immed) const {
     return false;
   }
   // Same encoding for add/sub, just flip the sign.
-  Immed = std::abs(Immed);
-  bool IsLegal = ((Immed >> 12) == 0 ||
-                  ((Immed & 0xfff) == 0 && Immed >> 24 == 0));
-  LLVM_DEBUG(dbgs() << "Is " << Immed
-                    << " legal add imm: " << (IsLegal ? "yes" : "no") << "\n");
-  return IsLegal;
+  return isLegalArithImmed((uint64_t)std::abs(Immed));
 }
 
 bool AArch64TargetLowering::isLegalAddScalableImmediate(int64_t Imm) const {
diff --git a/llvm/test/CodeGen/AArch64/cmp-to-cmn.ll b/llvm/test/CodeGen/AArch64/cmp-to-cmn.ll
index e87d43161a895..6b08e4b37190e 100644
--- a/llvm/test/CodeGen/AArch64/cmp-to-cmn.ll
+++ b/llvm/test/CodeGen/AArch64/cmp-to-cmn.ll
@@ -430,3 +430,187 @@ entry:
   %cmp = icmp ne i32 %conv, %add
   ret i1 %cmp
 }
+
+define i1 @cmn_large_imm(i32 %a) {
+; CHECK-LABEL: cmn_large_imm:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov w8, #64765 // =0xfcfd
+; CHECK-NEXT:    movk w8, #64764, lsl #16
+; CHECK-NEXT:    cmp w0, w8
+; CHECK-NEXT:    cset w0, gt
+; CHECK-NEXT:    ret
+  %cmp = icmp sgt i32 %a, -50529027
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_slt(i32 %x) {
+; CHECK-LABEL: almost_immediate_neg_slt:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov w8, #4097 // =0x1001
+; CHECK-NEXT:    movk w8, #65281, lsl #16
+; CHECK-NEXT:    cmp w0, w8
+; CHECK-NEXT:    cset w0, lt
+; CHECK-NEXT:    ret
+  %cmp = icmp slt i32 %x, -16707583
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_slt_64(i64 %x) {
+; CHECK-LABEL: almost_immediate_neg_slt_64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov x8, #-61439 // =0xffffffffffff1001
+; CHECK-NEXT:    movk x8, #65281, lsl #16
+; CHECK-NEXT:    cmp x0, x8
+; CHECK-NEXT:    cset w0, lt
+; CHECK-NEXT:    ret
+  %cmp = icmp slt i64 %x, -16707583
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_sge(i32 %x) {
+; CHECK-LABEL: almost_immediate_neg_sge:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    cmn w0, #4079, lsl #12 // =16707584
+; CHECK-NEXT:    cset w0, gt
+; CHECK-NEXT:    ret
+  %cmp = icmp sge i32 %x, -16707583
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_sge_64(i64 %x) {
+; CHECK-LABEL: almost_immediate_neg_sge_64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    cmn x0, #4079, lsl #12 // =16707584
+; CHECK-NEXT:    cset w0, gt
+; CHECK-NEXT:    ret
+  %cmp = icmp sge i64 %x, -16707583
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_uge(i32 %x) {
+; CHECK-LABEL: almost_immediate_neg_uge:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    cmn w0, #4079, lsl #12 // =16707584
+; CHECK-NEXT:    cset w0, hi
+; CHECK-NEXT:    ret
+  %cmp = icmp uge i32 %x, -16707583
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_uge_64(i64 %x) {
+; CHECK-LABEL: almost_immediate_neg_uge_64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    cmn x0, #4079, lsl #12 // =16707584
+; CHECK-NEXT:    cset w0, hi
+; CHECK-NEXT:    ret
+  %cmp = icmp uge i64 %x, -16707583
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_ult(i32 %x) {
+; CHECK-LABEL: almost_immediate_neg_ult:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov w8, #4097 // =0x1001
+; CHECK-NEXT:    movk w8, #65281, lsl #16
+; CHECK-NEXT:    cmp w0, w8
+; CHECK-NEXT:    cset w0, lo
+; CHECK-NEXT:    ret
+  %cmp = icmp ult i32 %x, -16707583
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_ult_64(i64 %x) {
+; CHECK-LABEL: almost_immediate_neg_ult_64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov x8, #-61439 // =0xffffffffffff1001
+; CHECK-NEXT:    movk x8, #65281, lsl #16
+; CHECK-NEXT:    cmp x0, x8
+; CHECK-NEXT:    cset w0, lo
+; CHECK-NEXT:    ret
+  %cmp = icmp ult i64 %x, -16707583
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_sle(i32 %x) {
+; CHECK-LABEL: almost_immediate_neg_sle:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    cmn w0, #4095, lsl #12 // =16773120
+; CHECK-NEXT:    cset w0, lt
+; CHECK-NEXT:    ret
+  %cmp = icmp sle i32 %x, -16773121
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_sle_64(i64 %x) {
+; CHECK-LABEL: almost_immediate_neg_sle_64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    cmn x0, #4095, lsl #12 // =16773120
+; CHECK-NEXT:    cset w0, lt
+; CHECK-NEXT:    ret
+  %cmp = icmp sle i64 %x, -16773121
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_sgt(i32 %x) {
+; CHECK-LABEL: almost_immediate_neg_sgt:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov w8, #-16773121 // =0xff000fff
+; CHECK-NEXT:    cmp w0, w8
+; CHECK-NEXT:    cset w0, gt
+; CHECK-NEXT:    ret
+  %cmp = icmp sgt i32 %x, -16773121
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_sgt_64(i64 %x) {
+; CHECK-LABEL: almost_immediate_neg_sgt_64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov x8, #-16773121 // =0xffffffffff000fff
+; CHECK-NEXT:    cmp x0, x8
+; CHECK-NEXT:    cset w0, gt
+; CHECK-NEXT:    ret
+  %cmp = icmp sgt i64 %x, -16773121
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_ule(i32 %x) {
+; CHECK-LABEL: almost_immediate_neg_ule:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    cmn w0, #4095, lsl #12 // =16773120
+; CHECK-NEXT:    cset w0, lo
+; CHECK-NEXT:    ret
+  %cmp = icmp ule i32 %x, -16773121
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_ule_64(i64 %x) {
+; CHECK-LABEL: almost_immediate_neg_ule_64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    cmn x0, #4095, lsl #12 // =16773120
+; CHECK-NEXT:    cset w0, lo
+; CHECK-NEXT:    ret
+  %cmp = icmp ule i64 %x, -16773121
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_ugt(i32 %x) {
+; CHECK-LABEL: almost_immediate_neg_ugt:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov w8, #-16773121 // =0xff000fff
+; CHECK-NEXT:    cmp w0, w8
+; CHECK-NEXT:    cset w0, hi
+; CHECK-NEXT:    ret
+  %cmp = icmp ugt i32 %x, -16773121
+  ret i1 %cmp
+}
+
+define i1 @almost_immediate_neg_ugt_64(i64 %x) {
+; CHECK-LABEL: almost_immediate_neg_ugt_64:
+; CHECK:       // %bb.0:
+; CHECK-NEXT:    mov x8, #-16773121 // =0xffffffffff000fff
+; CHECK-NEXT:    cmp x0, x8
+; CHECK-NEXT:    cset w0, hi
+; CHECK-NEXT:    ret
+  %cmp = icmp ugt i64 %x, -16773121
+  ret i1 %cmp
+}
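For reference, the immediate legality check performed by isLegalArithImmed (and spelled out in the removed lines of isLegalAddImmediate above) accepts an unsigned 12-bit value, optionally shifted left by 12 bits. A minimal standalone sketch of that check (the helper name here is illustrative only):

    #include <cstdint>

    // Sketch mirroring the check removed above: a legal add/sub/cmp immediate is
    // an unsigned 12-bit value, optionally shifted left by 12 bits.
    static bool isLegalArithImmedSketch(uint64_t C) {
      return (C >> 12) == 0 || ((C & 0xfff) == 0 && (C >> 24) == 0);
    }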

@AZero13 AZero13 force-pushed the icmp-immediate branch 2 times, most recently from 6c39147 to 716a599 on May 22, 2025 at 04:58
@AZero13
Contributor Author

AZero13 commented May 22, 2025

There are some regressions, but this is purely luck: CSE was kicking in when cmn #1 got adjusted to cmp #0 in some cases, particularly when cmp #0 allows folding an and into ands.
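For illustration (a hypothetical sequence, not taken from this PR's tests), a compare against zero lets the preceding and set the flags itself, so the cmp disappears:

    and  w8, w0, #0xff
    cmp  w8, #0
    cset w0, ne
    ; ...can instead be emitted as:
    ands w8, w0, #0xff
    cset w0, ne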

@AZero13 AZero13 changed the title Check for immediates using isLegalICmpImmediate [AArch64] Check for immediates using isLegalICmpImmediate May 22, 2025
@topperc
Collaborator

topperc commented May 22, 2025

Your title says isLegalICmpImmediate which is the name of a function in TargetTransformInfo and TargetLowering, but this patch doesn't use that function.

AZero13 added a commit to AZero13/llvm-project that referenced this pull request May 23, 2025
This relies on the fact that we can use tst and ands for comparisons, as done by emitComparison.

Relies on llvm#140999.

Fixes: llvm#141137
@AZero13 AZero13 changed the title [AArch64] Check for immediates using isLegalICmpImmediate [AArch64] Check for negative numbers when adjusting icmps May 23, 2025
@AZero13
Contributor Author

AZero13 commented May 23, 2025

Your title says isLegalICmpImmediate which is the name of a function in TargetTransformInfo and TargetLowering, but this patch doesn't use that function.

It was that way at first, but it didn't compile, so I did this instead.

@AZero13
Contributor Author

AZero13 commented May 23, 2025

The fix for the regression is here, and it does in fact require this PR to be merged first.

It is a different PR because technically this is another optimization in itself that will transform cmn -1 or cmp 1 to cmp 0 when profitable.

#141151
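As a rough sketch of that follow-up (hypothetical, based only on the description above): x > -1 is equivalent to x >= 0, so the comparison can use an immediate of zero, which in turn enables the ands folding shown earlier:

    cmn  w0, #1         // compare x with -1, then check gt
    cset w0, gt
    ; profitable form:
    cmp  w0, #0         // compare x with 0, then check ge
    cset w0, ge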

@AZero13
Contributor Author

AZero13 commented May 27, 2025

@topperc Thoughts on this?
